Inference for SDE models via Approximate Bayesian Computation
Models defined by stochastic differential equations (SDEs) allow for the
representation of random variability in dynamical systems. This class of models
is of growing relevance in many applied research areas and is already a
standard tool for modelling e.g. financial, neuronal and population growth dynamics.
However, inference for multidimensional SDE models is still very challenging,
both computationally and theoretically. Approximate Bayesian computation (ABC)
allows Bayesian inference to be performed for models which are sufficiently
complex that the likelihood function is either analytically unavailable or
computationally prohibitive to evaluate. A computationally efficient ABC-MCMC
algorithm is proposed, halving the running time in our simulations. Focus is on
the case where the SDE describes latent dynamics in state-space models; however
the methodology is not limited to the state-space framework. Simulation studies
for a pharmacokinetics/pharmacodynamics model and for stochastic chemical
reactions are considered and a MATLAB package implementing our ABC-MCMC
algorithm is provided.
Comment: Version accepted for publication in Journal of Computational &
Graphical Statistics
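The ABC-MCMC idea described in the abstract above can be sketched in a few lines. This is an illustrative Python sketch, not the paper's MATLAB package: the "early rejection" reordering shown here (checking the cheap Metropolis-Hastings prior ratio before running the expensive simulator) is one generic way such an algorithm can be sped up, and all function names and arguments are placeholders.

```python
import numpy as np

def abc_mcmc(theta0, prior_logpdf, simulate, summarise, s_obs,
             epsilon, n_iter=2000, step=0.5, seed=None):
    """ABC-MCMC sketch with early rejection: the cheap prior-ratio test
    is performed BEFORE running the expensive simulator, so proposals
    doomed to fail never trigger a simulation."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        # Early rejection: draw u first and test the prior ratio ...
        if np.log(rng.uniform()) < prior_logpdf(prop) - prior_logpdf(theta):
            # ... and only simulate when that cheap test passes.
            s_sim = summarise(simulate(prop, rng))
            if np.linalg.norm(s_sim - s_obs) <= epsilon:
                theta = prop  # accept the proposal
        chain[i] = theta      # otherwise keep the current state
    return chain
```

Because `u` is drawn before the simulation, the acceptance probability is unchanged; only the order of the two checks differs, which saves simulator calls whenever the prior ratio alone rejects.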
Classification using distance nearest neighbours
This paper proposes a new probabilistic classification algorithm using a
Markov random field approach. The joint distribution of class labels is
explicitly modelled using the distances between feature vectors. Intuitively, a
class label should depend more on class labels which are closer in the feature
space, than those which are further away. Our approach builds on previous work
by Holmes and Adams (2002, 2003) and Cucala et al. (2008). Our work shares many
of the advantages of these approaches in providing a probabilistic basis for
the statistical inference. In comparison to previous work, we present a more
efficient computational algorithm to overcome the intractability of the Markov
random field model. The results of our algorithm are encouraging in comparison
to the k-nearest neighbour algorithm.
Comment: 12 pages, 2 figures. To appear in Statistics and Computing
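As an illustration of the intuition above (a class label should be influenced more by nearby feature vectors than by distant ones), here is a toy distance-weighted neighbour classifier in Python. It is not the paper's Markov random field model; the exponential weighting, the parameter `beta` and the function names are assumptions chosen only for this sketch.

```python
import numpy as np

def knn_class_probs(X_train, y_train, x, k=3, beta=1.0):
    """Toy probabilistic classifier: each of the k nearest neighbours
    votes for its class with weight exp(-beta * distance), so nearer
    neighbours count more; scores are normalised into probabilities."""
    d = np.linalg.norm(X_train - x, axis=1)   # distances to the query point
    idx = np.argsort(d)[:k]                   # indices of the k nearest
    classes = np.unique(y_train)
    scores = np.array([np.exp(-beta * d[idx][y_train[idx] == c]).sum()
                       for c in classes])
    return classes, scores / scores.sum()
```

Unlike plain k-nearest neighbours, the output is a probability over classes rather than a single vote count, which is the probabilistic flavour the abstract emphasises.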
Computational system identification of continuous-time nonlinear systems using approximate Bayesian computation
In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives, as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
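The term-selection step described above, judging parameter significance from the parameter distributions that ABC generates, can be illustrated with one simple rule: keep a model term when the central credible interval of its parameter excludes zero. This is a hedged sketch, not necessarily the paper's exact criterion; the function name, the interval rule and the 95% level are illustrative assumptions.

```python
import numpy as np

def significant_terms(posterior_samples, names, level=0.95):
    """Keep a model term if the central credible interval of its
    parameter (estimated from posterior samples) excludes zero."""
    lo = (1 - level) / 2
    keep = []
    for j, name in enumerate(names):
        ql, qu = np.quantile(posterior_samples[:, j], [lo, 1 - lo])
        if ql > 0 or qu < 0:      # interval entirely above or below zero
            keep.append(name)
    return keep
```

The appeal of working with full distributions, as the abstract notes, is that such a rule comes for free from the ABC output, with no extra significance-testing machinery.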
Non-linear regression models for Approximate Bayesian Computation
Approximate Bayesian inference on the basis of summary statistics is
well-suited to complex problems for which the likelihood is either
mathematically or computationally intractable. However, methods that use
rejection suffer from the curse of dimensionality when the number of summary
statistics is increased. Here we propose a machine-learning approach to the
estimation of the posterior density by introducing two innovations. The new
method fits a nonlinear conditional heteroscedastic regression of the parameter
on the summary statistics, and then adaptively improves estimation using
importance sampling. The new algorithm is compared to the state-of-the-art
approximate Bayesian methods, and achieves considerable reduction of the
computational burden in two examples of inference in statistical genetics and
in a queueing model.
Comment: 4 figures; version 3 minor changes; to appear in Statistics and
Computing
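The regression-adjustment idea behind this line of work can be illustrated with the simpler linear version from the earlier ABC literature: regress accepted parameter draws on their summary statistics, then shift each draw to what it would have been had its summaries equalled the observed ones. This Python sketch uses a plain least-squares fit; the paper above replaces it with a nonlinear, heteroscedastic fit, so treat this strictly as an illustrative baseline with assumed names.

```python
import numpy as np

def abc_regression_adjust(theta, s, s_obs):
    """Linear regression adjustment for ABC: fit theta ~ summaries on
    accepted draws, then remove the fitted dependence so every draw is
    moved to the observed summary value s_obs."""
    ds = s - s_obs                                     # (n, q) summary residuals
    X = np.column_stack([np.ones(len(ds)), ds])        # intercept + residuals
    beta, *_ = np.linalg.lstsq(X, theta, rcond=None)   # least-squares fit
    return theta - ds @ beta[1:]                       # adjusted parameter draws
```

The adjustment lets a larger acceptance tolerance be used without biasing the posterior as badly, which is how regression-based ABC fights the curse of dimensionality mentioned in the abstract.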
Predicting how many animals will be where: how to build, calibrate and evaluate individual-based models
Individual-based models (IBMs) can simulate the actions of individual animals as they interact with one another and the landscape in which they live. When used in spatially-explicit landscapes, IBMs can show how populations change over time in response to management actions. For instance, IBMs are being used to design strategies for conservation and for the exploitation of fisheries, and to assess the effects on populations of major construction projects and of novel agricultural chemicals. In such real-world contexts, it becomes especially important to build IBMs in a principled fashion, and to approach calibration and evaluation systematically. We argue that insights from physiological and behavioural ecology offer a recipe for building realistic models, and that Approximate Bayesian Computation (ABC) is a promising technique for the calibration and evaluation of IBMs.
IBMs are constructed primarily from knowledge about individuals. In ecological applications the relevant knowledge is found in physiological and behavioural ecology, and we approach these from an evolutionary perspective by taking into account how physiological and behavioural processes contribute to life histories, and how those life histories evolve. Evolutionary life history theory shows that, other things being equal, organisms should grow to sexual maturity as fast as possible, and then reproduce as fast as possible, while minimising per capita death rate. Physiological and behavioural ecology are largely built on these principles together with the laws of conservation of matter and energy. To complete construction of an IBM, information is also needed on the effects of competitors, conspecifics and food scarcity; on the maximum rates of ingestion, growth and reproduction; and on life-history parameters.
Using this knowledge about physiological and behavioural processes provides a principled way to build IBMs, but model parameters vary between species and are often difficult to measure. A common solution is to manually compare model outputs with observations from real landscapes and so obtain parameters which produce acceptable fits of model to data. However, this procedure can be convoluted and lead to over-calibrated and thus inflexible models. Many formal statistical techniques are unsuitable for use with IBMs, but we argue that ABC offers a potential way forward. It can be used to calibrate and compare complex stochastic models and to assess the uncertainty in their predictions. We describe methods used to implement ABC in an accessible way and illustrate them with examples and discussion of recent studies. Although much progress has been made, theoretical issues remain, and some of these are outlined and discussed.
Markov chains for genetics and extremes
Available from British Library Document Supply Centre (DSC:DXN049476) / BLDSC - British Library Document Supply Centre. SIGLE, GB, United Kingdom.